Canada Government
AI floods Amazon with strange political books before Canadian election
Canada has seen a boom in political books created with generative artificial intelligence, adding to concerns about how new technologies are affecting the information voters receive during the election campaign. Canadian Prime Minister Mark Carney was the subject of at least 16 books published in March and listed on Amazon, according to a review of the site on April 16. Five of those were published on a single day. In total, some 30 titles were published about Carney this year and made available on Amazon -- but most were taken down from the site after inquiries were made.
Global disunity, energy concerns and the shadow of Musk: key takeaways from the Paris AI summit
A speech by the US vice-president, JD Vance, symbolised a fracturing consensus on how to approach AI. He attended the summit with other global leaders, including the Indian prime minister, Narendra Modi, the Canadian PM, Justin Trudeau, and the head of the European Commission, Ursula von der Leyen. In his speech in the Grand Palais, Vance made it clear the US was not going to be held back from developing the tech by global regulation or an excessive focus on safety. "We need international regulatory regimes that foster the creation of AI technology rather than strangle it, and we need our European friends, in particular, to look to this new frontier with optimism rather than trepidation," he said. Speaking in front of China's vice-premier, Zhang Guoqing, Vance warned his peers against cooperating with "authoritarian" regimes – in a clear reference to Beijing.
Evaluating the Quality of Answers in Political Q&A Sessions with Large Language Models
Alvarez, R. Michael, Morrier, Jacob
This paper presents a new approach to evaluating the quality of answers in political question-and-answer sessions. We propose to measure an answer's quality based on the degree to which it allows us to infer the initial question accurately. This conception of answer quality inherently reflects an answer's relevance to the initial question. Drawing parallels with semantic search, we argue that this measurement approach can be operationalized by fine-tuning a large language model on the observed corpus of questions and answers without additional labeled data. We showcase our measurement approach within the context of the Question Period in the Canadian House of Commons. Our approach yields valuable insights into the correlates of the quality of answers in the Question Period. We find that answer quality varies significantly based on the party affiliation of the members of Parliament asking the questions and uncover a meaningful correlation between answer quality and the topics of the questions.
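The core idea above can be sketched in miniature. The paper fine-tunes a large language model to infer the question from the answer; in the hypothetical sketch below, a simple bag-of-words cosine similarity stands in for that model, and answer quality is scored as the reciprocal rank of the true question when all questions are ranked by similarity to the answer. The example questions and answer are invented for illustration.

```python
# Toy proxy for "can the question be inferred from the answer?":
# rank all questions by similarity to the answer; quality = 1 / rank of
# the true question (1.0 means the question is perfectly recoverable).
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words vector for a text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_quality(answer: str, true_q: str, all_qs: list) -> float:
    """Reciprocal rank of the true question among all questions,
    ranked by similarity to the answer."""
    ranked = sorted(all_qs, key=lambda q: cosine(bow(answer), bow(q)),
                    reverse=True)
    return 1.0 / (ranked.index(true_q) + 1)

questions = [
    "what is the budget for rural broadband",
    "when will the carbon tax rebate arrive",
]
answer = "the broadband budget allocates funds to rural communities"
print(answer_quality(answer, questions[0], questions))  # → 1.0
```

A fine-tuned LLM replaces the cosine step in the actual method, but the scoring logic — quality as inferability of the question — is the same.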
Enhancing Lithological Mapping with Spatially Constrained Bayesian Network (SCB-Net): An Approach for Field Data-Constrained Predictions with Uncertainty Evaluation
Santos, Victor Silva dos, Gloaguen, Erwan, Tirdad, Shiva
Geological maps are an extremely valuable source of information for the Earth sciences. They provide insights into mineral exploration, vulnerability to natural hazards, and many other applications. These maps are created using numerical or conceptual models that use geological observations to extrapolate data. Geostatistical techniques have traditionally been used to generate reliable predictions that take into account the spatial patterns inherent in the data. However, as the number of auxiliary variables increases, these methods become more labor-intensive. Additionally, traditional machine learning methods often struggle with spatially correlated data and extracting valuable non-linear information from geoscientific datasets. To address these limitations, a new architecture called the Spatially Constrained Bayesian Network (SCB-Net) has been developed. The SCB-Net aims to effectively exploit the information from auxiliary variables while producing spatially constrained predictions. It is made up of two parts: the first learns underlying patterns in the auxiliary variables, while the second integrates ground-truth data with the embeddings learned by the first. Moreover, to assess model uncertainty, a technique called Monte Carlo dropout is used as a Bayesian approximation. The SCB-Net has been applied to two selected areas in northern Quebec, Canada, and has demonstrated its potential in generating field-data-constrained lithological maps while allowing assessment of prediction uncertainty for decision-making. This study highlights the promising advancements of deep neural networks in geostatistics, particularly in handling complex spatial feature learning tasks, leading to improved techniques for spatial prediction.
Editing Massive Concepts in Text-to-Image Diffusion Models
Xiong, Tianwei, Wu, Yue, Xie, Enze, Wu, Yue, Li, Zhenguo, Liu, Xihui
Text-to-image diffusion models risk generating outdated, copyrighted, incorrect, and biased content. While previous methods have mitigated these issues on a small scale, it is essential to handle them simultaneously in larger-scale real-world scenarios. We propose a two-stage method, Editing Massive Concepts In Diffusion Models (EMCID). The first stage performs memory optimization for each individual concept with dual self-distillation from text alignment loss and diffusion noise prediction loss. The second stage conducts massive concept editing with multi-layer, closed-form model editing. We further propose a comprehensive benchmark, named ImageNet Concept Editing Benchmark (ICEB), for evaluating massive concept editing for T2I models, with two subtasks, free-form prompts, massive concept categories, and extensive evaluation metrics. Extensive experiments conducted on our proposed benchmark and previous benchmarks demonstrate the superior scalability of EMCID for editing up to 1,000 concepts, providing a practical approach for fast adjustment and re-deployment of T2I diffusion models in real-world applications.
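The "closed-form model editing" in the second stage refers to updating weights analytically rather than by gradient descent. EMCID's actual update is multi-layer and more involved; the hedged sketch below shows only the generic building block used by closed-form editors of this family: a rank-one least-squares update that remaps a "key" activation k to a new "value" v_new while minimally perturbing the weight matrix W. All matrices and vectors here are toy numbers, not from the paper.

```python
# Generic closed-form rank-one edit (an assumption about the family of
# methods, not EMCID's exact update):
#   W' = W + (v_new - W k) k^T / (k^T k)
# After the edit, W' @ k == v_new, and the Frobenius-norm change to W is
# the minimum among all matrices satisfying that constraint.
def rank_one_edit(W, k, v_new):
    """Return W' such that W' @ k == v_new with minimal change to W."""
    n = len(k)
    Wk = [sum(row[j] * k[j] for j in range(n)) for row in W]   # W @ k
    kk = sum(x * x for x in k)                                 # k^T k
    return [[W[i][j] + (v_new[i] - Wk[i]) * k[j] / kk for j in range(n)]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # toy weight matrix
k = [1.0, 0.0]                 # key: activation encoding a concept
v_new = [2.0, 3.0]             # desired output for that concept
W_edited = rank_one_edit(W, k, v_new)
# W_edited now maps k exactly to v_new: [[2.0, 0.0], [3.0, 1.0]]
```

Because the update is a single algebraic formula, editing many concepts is fast, which is what makes the approach scale to the 1,000-concept setting the paper reports.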
Coast Guard to lead transnational investigation into Titan implosion accountability
A transnational inquiry has been launched to determine accountability for the deaths of five passengers aboard the OceanGate Expeditions submersible that imploded during a descent to the wreckage of the Titanic in the North Atlantic, the United States Coast Guard announced Sunday. Maritime agencies from Canada, France and Britain are joining an investigation that will be led by the Coast Guard, Capt. Jason Neubauer said during a news conference at Coast Guard Base Boston. Neubauer said the priority of the investigation, known as a Marine Board of Investigation, or MBI, "is to recover items from the seafloor." Neubauer said investigators will also determine "the cause of this marine casualty" and establish accountability.
Laplacian Change Point Detection for Single and Multi-view Dynamic Graphs
Huang, Shenyang, Coulombe, Samy, Hitti, Yasmeen, Rabbany, Reihaneh, Rabusseau, Guillaume
Dynamic graphs are rich data structures that are used to model complex relationships between entities over time. In particular, anomaly detection in temporal graphs is crucial for many real world applications such as intrusion identification in network systems, detection of ecosystem disturbances and detection of epidemic outbreaks. In this paper, we focus on change point detection in dynamic graphs and address three main challenges associated with this problem: i). how to compare graph snapshots across time, ii). how to capture temporal dependencies, and iii). how to combine different views of a temporal graph. To solve the above challenges, we first propose Laplacian Anomaly Detection (LAD), which uses the spectrum of the graph Laplacian as the low-dimensional embedding of the graph structure at each snapshot. LAD explicitly models short-term and long-term dependencies by applying two sliding windows. Next, we propose MultiLAD, a simple and effective generalization of LAD to multi-view graphs. MultiLAD provides the first change point detection method for multi-view dynamic graphs. It aggregates the singular values of the normalized graph Laplacian from different views through the scalar power mean operation. Through extensive synthetic experiments, we show that i). LAD and MultiLAD are accurate and outperform state-of-the-art baselines and their multi-view extensions by a large margin, ii). MultiLAD's advantage over contenders significantly increases when additional views are available, and iii). MultiLAD is highly robust to noise from individual views. On five real-world dynamic graphs, we demonstrate that LAD and MultiLAD identify significant events as top anomalies, such as the implementation of government COVID-19 interventions, which impacted population mobility in multi-view traffic networks.
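The spectral-embedding-plus-sliding-window recipe above can be illustrated in miniature. The hedged sketch below summarizes each snapshot by its Laplacian's dominant eigenvalue (computed by power iteration, standing in for the full spectrum LAD uses) and flags a change point when that summary deviates from a short sliding-window average; the graphs, window size, and threshold are toy choices, not the paper's.

```python
# Toy change point detector in the spirit of LAD: per-snapshot spectral
# summary + sliding-window comparison.
import random

def laplacian(adj):
    """Graph Laplacian L = D - A from an adjacency matrix."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else -adj[i][j]) for j in range(n)]
            for i in range(n)]

def top_eigenvalue(m, iters=200, seed=0):
    """Largest eigenvalue of a symmetric matrix via power iteration."""
    rng = random.Random(seed)
    v = [rng.random() for _ in m]
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in m]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    w = [sum(row[j] * v[j] for j in range(len(v))) for row in m]
    return sum(wi * vi for wi, vi in zip(w, v)) / sum(vi * vi for vi in v)

def change_points(snapshots, window=3, threshold=0.5):
    """Flag snapshot indices whose spectral summary jumps away from the
    mean of the preceding sliding window."""
    scores = [top_eigenvalue(laplacian(a)) for a in snapshots]
    flagged = []
    for t in range(window, len(scores)):
        ref = sum(scores[t - window:t]) / window
        if abs(scores[t] - ref) > threshold:
            flagged.append(t)
    return flagged

path = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]    # sparse 4-node path
clique = [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]]  # densified snapshot
print(change_points([path, path, path, clique]))    # → [3]
```

LAD proper uses the full singular-value vector of the (normalized) Laplacian and two windows for short- and long-term context, and MultiLAD additionally fuses the per-view spectra with a scalar power mean; the detection logic, however, follows this same compare-against-recent-history pattern.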
Canada: twelve new AI projects and $50M investment for SCALE AI - Actu IA
A Canadian supercluster based in Montreal, SCALE AI acts as an investment and innovation hub to accelerate the adoption and rapid integration of AI in Canada. This Monday, August 22, it unveiled twelve new projects aimed at optimizing production and transportation through AI. With the goal of addressing critical challenges currently facing supply chains, including the impact of the pandemic, labor shortages, and environmental requirements, it will provide an unprecedented $50 million in financial support. Funded by the federal and Quebec governments, SCALE AI brings together the retail, manufacturing, transportation, infrastructure and information and communications technology (ICT) sectors to build smart supply chains. The supercluster has nearly 500 industrial partners, research institutions and other AI players with whom it develops programs to support investment projects by companies implementing concrete AI applications.
Canada: What's in the second phase of the Pan-Canadian Artificial Intelligence Strategy? - Actu IA
A few weeks ago, François-Philippe Champagne, Canada's Minister of Innovation, Science and Industry, announced the launch of the second phase of the Pan-Canadian Artificial Intelligence Strategy. This second phase, which is expected to benefit from an investment of more than $443 million, aims to attract the best talent, increase cutting-edge research capacity, and foster the commercialization and adoption of AI. In 2017, Canada became the first country to establish a national AI strategy. CIFAR, the Canadian Institute for Advanced Research, was tasked with developing and leading this pan-Canadian strategy, initially funded with $125 million. CIFAR is working closely with the three national AI institutes: Mila in Montreal, the Vector Institute in Toronto and Amii in Edmonton, as well as with Canadian universities, hospitals and other organizations.
IBM working with Quebec gov't to deploy quantum system in Canada
IBM announced on Thursday that it is planning to open the first IBM Quantum System in Canada as part of a larger partnership with the Quebec government. IBM will be working with the government on a variety of initiatives that include quantum computing, AI, cloud, and research projects. The Quebec-IBM Discovery Accelerator will help researchers with the government and the Université de Sherbrooke work on projects in areas like semiconductors, energy, sustainability and life sciences. François Legault, Premier of Quebec, said acquiring an IBM quantum computer will pave the way for them "to make incredible progress in areas such as artificial intelligence and modeling." "Quebec's potential to innovate in high technology and be a leader in the economy of the future is immense. We have world-class universities, creative entrepreneurs, and talented workers," Legault said.